Search Results: "luk"

30 August 2017

Kees Cook: GRUB and LUKS

I got myself stuck yesterday with GRUB running from an ext4 /boot/grub, but with /boot inside my LUKS LVM root partition, which meant GRUB couldn't load the initramfs and kernel. Luckily, it turns out that GRUB does know how to mount LUKS volumes (and LVM volumes), but all the instructions I could find talk about setting this up ahead of time ("Add GRUB_ENABLE_CRYPTODISK=y to /etc/default/grub"), rather than what the correct manual GRUB commands are to get things running on a failed boot. These are my notes on that, in case I ever need to do this again, since there was one specific gotcha with using GRUB's cryptomount command (noted below). Available devices were the raw disk (hd0), the /boot/grub partition (hd0,msdos1), and the LUKS volume (hd0,msdos5):
grub> ls
(hd0) (hd0,msdos1) (hd0,msdos5)
Used cryptomount to open the LUKS volume (but without ()s! It says it works if you use parens, but then you can't use the resulting (crypto0)):
grub> insmod luks
grub> cryptomount hd0,msdos5
Enter password...
Slot 0 opened.
Then you can load LVM and it'll see inside the LUKS volume:
grub> insmod lvm
grub> ls
(crypto0) (hd0) (hd0,msdos1) (hd0,msdos5) (lvm/rootvg-rootlv)
And then I could boot normally:
grub> configfile $prefix/grub.cfg
After booting, I added GRUB_ENABLE_CRYPTODISK=y to /etc/default/grub and ran update-grub. I could boot normally after that, though I'd be prompted twice for the LUKS passphrase (once by GRUB, then again by the initramfs). To avoid this, it's possible to add a second LUKS passphrase, contained in a file in the initramfs, as described here; this works for Ubuntu and Debian too. The quick summary is: Create the keyfile and add it to LUKS:
# dd bs=512 count=4 if=/dev/urandom of=/crypto_keyfile.bin
# chmod 0400 /crypto_keyfile.bin
# cryptsetup luksAddKey /dev/sda5 /crypto_keyfile.bin
*enter original password*
Adjust the /etc/crypttab to include passing the file via /bin/cat:
sda5_crypt UUID=4aa5da72-8da6-11e7-8ac9-001cc008534d /crypto_keyfile.bin luks,keyscript=/bin/cat
Add an initramfs hook to copy the key file into the initramfs, keep non-root users from being able to read your initramfs, and trigger a rebuild:
# cat > /etc/initramfs-tools/hooks/crypto_keyfile <<"EOF"
#!/bin/bash
if [ "$1" = "prereqs" ] ; then
    cp /crypto_keyfile.bin "${DESTDIR}"
fi
EOF
# chmod a+x /etc/initramfs-tools/hooks/crypto_keyfile
# chmod 0700 /boot
# update-initramfs -u
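To sanity-check the result, lsinitramfs (from initramfs-tools) can confirm that the keyfile actually landed in the generated image; the image path below assumes the usual Debian/Ubuntu naming:
# lsinitramfs /boot/initrd.img-$(uname -r) | grep crypto_keyfile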
This has the downside of leaving a LUKS passphrase in the clear while you're booted, but if someone has root, they can just get your dm-crypt encryption key directly anyway:
# dmsetup table --showkeys sda5_crypt
0 155797496 crypt aes-cbc-essiv:sha256 e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 8:5 2056
And of course if you're worried about Evil Maid attacks, you'll need a real static root of trust instead of doing full disk encryption passphrase prompting from an unverified /boot partition. :)

2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

17 August 2017

Reproducible builds folks: Reproducible Builds: Weekly report #120

Here's what happened in the Reproducible Builds effort between Sunday 6th and Saturday 12th August 2017:

Notes about reviews of unreproducible packages
13 package reviews have been added, 7 have been updated and 34 have been removed in this week, adding to our knowledge about identified issues.

The report also covered packages reviewed and fixed and reproducibility-related bugs filed (upstream packages and other bugs filed), diffoscope development, trydiffoscope development, tests.reproducible-builds.org, and misc.

This week's edition was written by Chris Lamb & Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

5 August 2017

Lars Wirzenius: Enabling TRIM/DISCARD on Debian, ext4, luks, and lvm

I realised recently that my laptop isn't set up to send TRIM or DISCARD commands to its SSD. That means the SSD firmware has a harder time doing garbage collection (see the linked Wikipedia page for more details). After some searching, I found two articles by Christopher Smart: one, update. Those, plus some additional reading of documentation, and a little experimentation, allowed me to do this. Since the information is a bit scattered, here are the details, for Debian stretch, as much for my own memory as to make sure this is collected into one place. Note that it seems to be a possible information leak to TRIM encrypted devices. I don't know the details, but if that bothers you, don't do it. I don't know of any harmful effects of enabling TRIM for everything, except the crypto bit above, so I wonder if it wouldn't make sense for the Debian installer to do this by default.
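(For reference, and not necessarily the exact steps from the linked articles: on a Debian stretch stack like this, enabling TRIM usually means letting discards pass through every layer, roughly as sketched below.)
# /etc/crypttab: append the discard option to the LUKS device, e.g.
#   sda3_crypt UUID=... none luks,discard
# /etc/lvm/lvm.conf: set issue_discards = 1 in the devices section
# then rebuild the initramfs, and either mount ext4 with -o discard or trim periodically:
update-initramfs -u
fstrim -av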

18 July 2017

Matthew Garrett: Avoiding TPM PCR fragility using Secure Boot

In measured boot, each component of the boot process is "measured" (ie, hashed and that hash recorded) in a register in the Trusted Platform Module (TPM) built into the system. The TPM has several different registers (Platform Configuration Registers, or PCRs) which are typically used for different purposes - for instance, PCR0 contains measurements of various system firmware components, PCR2 contains any option ROMs, PCR4 contains information about the partition table and the bootloader. The allocation of these is defined by the PC Client working group of the Trusted Computing Group. However, once the boot loader takes over, we're outside the spec[1].

One important thing to note here is that the TPM doesn't actually have any ability to directly interfere with the boot process. If you try to boot modified code on a system, the TPM will contain different measurements but boot will still succeed. What the TPM can do is refuse to hand over secrets unless the measurements are correct. This allows for configurations where your disk encryption key can be stored in the TPM and then handed over automatically if the measurements are unaltered. If anybody interferes with your boot process then the measurements will be different, the TPM will refuse to hand over the key, your disk will remain encrypted and whoever's trying to compromise your machine will be sad.

The problem here is that a lot of things can affect the measurements. Upgrading your bootloader or kernel will do so. At that point if you reboot your disk fails to unlock and you become unhappy. To get around this your update system needs to notice that a new component is about to be installed, generate the new expected hashes and re-seal the secret to the TPM using the new hashes. If there are several different points in the update where this can happen, this can quite easily go wrong. And if it goes wrong, you're back to being unhappy.

Is there a way to improve this? Surprisingly, the answer is "yes" and the people to thank are Microsoft. Appendix A of a basically entirely unrelated spec defines a mechanism for storing the UEFI Secure Boot policy and used keys in PCR 7 of the TPM. The idea here is that you trust your OS vendor (since otherwise they could just backdoor your system anyway), so anything signed by your OS vendor is acceptable. If someone tries to boot something signed by a different vendor then PCR 7 will be different. If someone disables secure boot, PCR 7 will be different. If you upgrade your bootloader or kernel, PCR 7 will be the same. This simplifies things significantly.

I've put together a (not well-tested) patchset for Shim that adds support for including Shim's measurements in PCR 7. In conjunction with appropriate firmware, it should then be straightforward to seal secrets to PCR 7 and not worry about things breaking over system updates. This makes tying things like disk encryption keys to the TPM much more reasonable.
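To give a rough idea of what sealing to PCR 7 looks like from userspace, the TPM 1.2 tpm-tools provide tpm_sealdata and tpm_unsealdata; this is only an illustrative sketch, not part of the patchset (the file names are made up, and -z assumes the well-known SRK secret):

# tpm_sealdata -z -p 7 -i disk-key.bin -o disk-key.sealed
# tpm_unsealdata -z -i disk-key.sealed -o disk-key.bin

The unseal step only succeeds while PCR 7 still holds the expected value.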

However, there's still one pretty major problem, which is that the initramfs (ie, the component responsible for setting up the disk encryption in the first place) isn't signed and isn't included in PCR 7[2]. An attacker can simply modify it to stash any TPM-backed secrets or mount the encrypted filesystem and then drop to a root prompt. This, uh, reduces the utility of the entire exercise.

The simplest solution to this that I've come up with depends on how Linux implements initramfs files. In its simplest form, an initramfs is just a cpio archive. In its slightly more complicated form, it's a compressed cpio archive. And in its peak form of evolution, it's a series of compressed cpio archives concatenated together. As the kernel reads each one in turn, it extracts it over the previous ones. That means that any files in the final archive will overwrite files of the same name in previous archives.
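To make the override behaviour concrete, here is a minimal sketch (the file and archive names are made up) that builds such a concatenated initramfs by hand:

mkdir -p first second
echo "from first" > first/hello.txt
echo "from second" > second/hello.txt
(cd first && echo hello.txt | cpio -o -H newc | gzip) > initrd.img
(cd second && echo hello.txt | cpio -o -H newc | gzip) >> initrd.img

A kernel booted with this initrd.img unpacks both archives in order, so /hello.txt ends up containing "from second".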

My proposal is to generate a small initramfs whose sole job is to get secrets from the TPM and stash them in the kernel keyring, and then measure an additional value into PCR 7 in order to ensure that the secrets can't be obtained again. Later disk encryption setup will then be able to set up dm-crypt using the secret already stored within the kernel. This small initramfs will be built into the signed kernel image, and the bootloader will be responsible for appending it to the end of any user-provided initramfs. This means that the TPM will only grant access to the secrets while trustworthy code is running - once the secret is in the kernel it will only be available for in-kernel use, and once PCR 7 has been modified the TPM won't give it to anyone else. A similar approach for some kernel command-line arguments (the kernel, module-init-tools and systemd all interpret the kernel command line left-to-right, with later arguments overriding earlier ones) would make it possible to ensure that certain kernel configuration options (such as the iommu) weren't overridable by an attacker.

There's obviously a few things that have to be done here (standardise how to embed such an initramfs in the kernel image, ensure that luks knows how to use the kernel keyring, teach all relevant bootloaders how to handle these images), but overall this should make it practical to use PCR 7 as a mechanism for supporting TPM-backed disk encryption secrets on Linux without introducing a huge support burden in the process.

[1] The patchset I've posted to add measured boot support to Grub uses PCRs 8 and 9 to measure various components during the boot process, but other bootloaders may have different policies.

[2] This is because most Linux systems generate the initramfs locally rather than shipping it pre-built. It may also get rebuilt on various userspace updates, even if the kernel hasn't changed. Including it in PCR 7 would entirely break the fragility guarantees and defeat the point of all of this.


23 May 2017

Reproducible builds folks: Reproducible Builds: week 108 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday May 14 and Saturday May 20 2017:

IRC meeting
Our next IRC meeting has been scheduled for Thursday June 1 at 16:00 UTC.

Packages reviewed and fixed, bugs filed, etc.
Contributions came from Bernhard M. Wiedemann and Chris Lamb.

Reviews of unreproducible packages
35 package reviews have been added, 28 have been updated and 12 have been removed in this week, adding to our knowledge about identified issues. 2 issue types have been added.

tests.reproducible-builds.org
Holger wrote a new systemd-based scheduling system, replacing 162 constantly running Jenkins jobs which were slowing down job execution in general.

The report also covered news and media coverage, diffoscope and strip-nondeterminism development, and misc.

This week's edition was written by Chris Lamb, Holger Levsen, Bernhard M. Wiedemann, Vagrant Cascadian and Maria Glukhova & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

17 May 2017

Reproducible builds folks: Reproducible Builds: week 107 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday May 7 and Saturday May 13 2017:

Report from Reproducible Builds Hamburg Hackathon
We were 16 participants from 12 projects: 7 Debian, 2 repeatr.io, 1 ArchLinux, 1 coreboot + LEDE, 1 F-Droid, 1 ElectroBSD + privoxy, 1 GNU R, 1 in-toto.io, 1 Meson and 1 openSUSE. Three people came from the USA, 3 from the UK, 2 from Finland, 1 from Austria, 1 from Denmark and 6 from Germany, plus we had several guests from our gracious hosts at the CCCHH hackerspace as well as a guest from Australia. We had four presentations, worked on a number of things, and had a Debian-focussed meeting where we discussed several topics. And then we also had a lot of fun in the hackerspace, enjoying some of their gimmicks, such as being able to open physical doors with ssh or controlling light and music from a web browser without authentication (besides being on the right network). (This wasn't the hackathon per se, but some of us appreciated these sights and so we thought you would too.)

News and media coverage
openSUSE has had a security breach in their infrastructure, including their build services. As of this writing, the scope and impact are still unclear; however, the incident illustrates that no one should rely on being able to secure their infrastructure at all times. Reproducible Builds help mitigate this by allowing independent verification of build results by parties that are unaffected by the compromise. (Whilst this can happen to anyone, kudos to openSUSE for being open about it. Now let's continue working on Reproducible Builds everywhere!) On May 13th Chris Lamb gave a talk on Reproducible Builds at OSCAL 2017 in Tirana, Albania.

Reviews of unreproducible packages
11 package reviews have been added, 2562 have been updated and 278 have been removed in this week, adding to our knowledge about identified issues. Most of the updates were to move ~1800 packages affected by the generic catch-all captures_build_path (out of ~2600 total) to the more specific gcc_captures_build_path, fixed by our proposed patches to GCC. 5 issue types have been updated.

Weekly QA work
During our reproducibility testing, FTBFS bugs have been detected and reported.

diffoscope development
diffoscope development continued on the experimental branch.

The report also covered toolchain bug reports and fixes, packages' bug reports, strip-nondeterminism, reprotest and trydiffoscope development, and misc.

This week's edition was written by Ximin Luo, Holger Levsen and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

16 May 2017

Francois Marier: Recovering from an unbootable Ubuntu encrypted LVM root partition

A laptop that was installed using the default Ubuntu 16.10 (yakkety) full-disk encryption option stopped booting after receiving a kernel update somewhere on the way to Ubuntu 17.04 (zesty). After showing the boot screen for about 30 seconds, a busybox shell pops up:
BusyBox v.1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)
Enter 'help' for list of built-in commands.
(initramfs)
Typing exit will display more information about the failure before bringing us back to the same busybox shell:
Gave up waiting for root device. Common problems:
  - Boot args (cat /proc/cmdline)
    - Check rootdelay= (did the system wait long enough?)
    - Check root= (did the system wait for the right device?)
  - Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/mapper/ubuntu--vg-root does not exist. Dropping to a shell! 
BusyBox v.1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)   
Enter 'help' for list of built-in commands.  
(initramfs)
which now complains that the /dev/mapper/ubuntu--vg-root root partition (which uses LUKS and LVM) cannot be found. There is some comprehensive advice out there but it didn't quite work for me. This is how I ended up resolving the problem.

Boot using a USB installation disk First, create a bootable USB disk using the latest Ubuntu installer:
  1. Download a desktop image.
  2. Copy the ISO directly onto the USB stick (overwriting it in the process):
     dd if=ubuntu.iso of=/dev/sdc
    
and boot the system using that USB stick (hold the option key during boot on Apple hardware).

Mount the encrypted partition Assuming a drive which is partitioned this way:
  • /dev/sda1: EFI partition
  • /dev/sda2: unencrypted boot partition
  • /dev/sda3: encrypted LVM partition
Open a terminal and mount the required partitions:
cryptsetup luksOpen /dev/sda3 sda3_crypt
vgchange -ay
mount /dev/mapper/ubuntu--vg-root /mnt
mount /dev/sda2 /mnt/boot
mount -t proc proc /mnt/proc
mount -o bind /dev /mnt/dev
Note:
  • When running cryptsetup luksOpen, you must use the same name as the one that is in /etc/crypttab on the root partition (sda3_crypt in this example).
  • All of these partitions must be present (including /proc and /dev) for the initramfs scripts to do all of their work. If you see errors or warnings, you must resolve them.

Regenerate the initramfs on the boot partition Then "enter" the root partition using:
chroot /mnt
and make sure that the lvm2 package is installed:
apt install lvm2
before regenerating the initramfs for all of the installed kernels:
update-initramfs -c -k all
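To finish up (these last steps are the obvious inverse of the mount steps above, not spelled out in the original notes), leave the chroot, unmount everything and reboot:
exit
umount /mnt/dev /mnt/proc /mnt/boot
umount /mnt
reboot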

1 April 2017

Francois Marier: Manually expanding a RAID1 array on Ubuntu

Here are the notes I took while manually expanding a non-LVM encrypted RAID1 array on an Ubuntu machine. My original setup consisted of a 1 TB drive along with a 2 TB drive, which meant that the RAID1 array was 1 TB in size and the second drive had 1 TB of unused capacity. This is how I replaced the old 1 TB drive with a new 3 TB drive and expanded the RAID1 array to 2 TB (leaving 1 TB unused on the new 3 TB drive).

Partition the new drive In order to partition the new 3 TB drive, I started by creating a temporary partition on the old 2 TB drive (/dev/sdc) to use up all of the capacity on that drive:
$ parted /dev/sdc
unit s
print
mkpart
print
Then I initialized the partition table and created the EFI partition on the new drive (/dev/sdd):
$ parted /dev/sdd
unit s
mktable gpt
mkpart
Since I want to have the RAID1 array be as large as the smaller of the two drives, I made sure that the second partition (/home) on the new 3 TB drive had:
  • the same start position as the second partition on the old drive
  • the end position of the third partition (the temporary one I just created) on the old drive
I created the partition and flagged it as a RAID one:
mkpart
toggle 2 raid
and then deleted the temporary partition on the old 2 TB drive:
$ parted /dev/sdc
print
rm 3
print

Create a temporary RAID1 array on the new drive With the new drive properly partitioned, I created a new RAID array for it:
mdadm /dev/md10 --create --level=1 --raid-devices=2 /dev/sdd1 missing
and added it to /etc/mdadm/mdadm.conf:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
which required manual editing of that file to remove duplicate entries.
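After that cleanup, the file should contain exactly one ARRAY line per array, along these lines (the names and UUIDs here are placeholders):

ARRAY /dev/md1 metadata=1.2 name=mymachinename:1 UUID=11111111:22222222:33333333:44444444
ARRAY /dev/md10 metadata=1.2 name=mymachinename:10 UUID=55555555:66666666:77777777:88888888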

Create the encrypted partition With the new RAID device in place, I created the encrypted LUKS partition:
cryptsetup -h sha256 -c aes-xts-plain64 -s 512 luksFormat /dev/md10
cryptsetup luksOpen /dev/md10 chome2
I took the UUID for the temporary RAID partition:
blkid /dev/md10
and put it in /etc/crypttab as chome2. Then, I formatted the new LUKS partition and mounted it:
mkfs.ext4 -m 0 /dev/mapper/chome2
mkdir /home2
mount /dev/mapper/chome2 /home2
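For reference, the crypttab entry mentioned above would look something like this (the UUID being a placeholder for the actual blkid output):

chome2 UUID=12345678-1234-1234-1234-123456789abc none luks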

Copy the data from the old drive With the home partitions of both drives mounted, I copied the files over to the new drive:
eatmydata nice ionice -c3 rsync -axHAX --progress /home/* /home2/
making use of wrappers that preserve system responsiveness during I/O-intensive operations.

Switch over to the new drive After the copy, I switched over to the new drive in a step-by-step way:
  1. Changed the UUID of chome in /etc/crypttab.
  2. Changed the UUID and name of /dev/md1 in /etc/mdadm/mdadm.conf.
  3. Rebooted with both drives.
  4. Checked that the new drive was the one used in the encrypted /home mount using: df -h.

Add the old drive to the new RAID array With all of this working, it was time to clear the mdadm superblock from the old drive:
mdadm --zero-superblock /dev/sdc1
and then change the second partition of the old drive to make it the same size as the one on the new drive:
$ parted /dev/sdc
rm 2
mkpart
toggle 2 raid
print
before adding it to the new array:
mdadm /dev/md1 -a /dev/sdc1

Rename the new array To change the name of the new RAID array back to what it was on the old drive, I first had to stop both the old and the new RAID arrays:
umount /home
cryptsetup luksClose chome
mdadm --stop /dev/md10
mdadm --stop /dev/md1
before running this command:
mdadm --assemble /dev/md1 --name=mymachinename:1 --update=name /dev/sdd2
and updating the name in /etc/mdadm/mdadm.conf. The last step was to regenerate the initramfs:
update-initramfs -u
before rebooting into something that looks exactly like the original RAID1 array but with twice the size.

21 March 2017

Reproducible builds folks: Reproducible Builds: week 99 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday March 12 and Saturday March 18 2017:

Upcoming events: Reproducible Builds Hackathon Hamburg 2017
The Reproducible Builds Hamburg Hackathon 2017, or RB-HH-2017 for short, is a 3-day hacking event taking place May 5th-7th in the CCC Hamburg Hackerspace located inside Frappant, a collective art space in a historical monument in Hamburg, Germany. The aim of the hackathon is to spend some days working on Reproducible Builds in every distribution and project. The event is open to anybody interested in working on Reproducible Builds issues, with or without prior experience! Accommodation is available and travel sponsorship may be available by agreement. Please register your interest as soon as possible.

Reproducible Builds Summit Berlin 2016
This is just a quick note that all the pads we wrote during the Berlin summit in December 2016 are now online (thanks to Holger), nicely complementing the report by Aspiration Tech.

Request For Comments for new specification: BUILD_PATH_PREFIX_MAP
Ximin Luo posted a draft version of our BUILD_PATH_PREFIX_MAP specification for passing build-time paths between high-level and low-level build tools. This is meant to help eliminate irreproducibility caused by different paths being used at build time. At the time of writing, this affects an estimated 15-20% of 25000 Debian packages. This is a continuation of an older proposal, SOURCE_PREFIX_MAP, which has been updated based on feedback on our patches from GCC upstream, attendees of our Berlin 2016 summit, and participants on our mailing list. Thanks to everyone who contributed! The specification also contains runnable source code examples and test cases; see our git repo. Please comment on this draft ASAP - we plan to release version 1.0 of this in a few weeks.

Reviews of unreproducible packages
5 package reviews have been added, 274 have been updated and 800 have been removed in this week, adding to our knowledge about identified issues. 1 issue type has been added.

Weekly QA work
During our reproducibility testing, FTBFS bugs have been detected and reported.

diffoscope development
diffoscope 79 and 80 were uploaded to experimental by Chris Lamb, with contributions from Chris Lamb and Maria Glukhova.

strip-nondeterminism development
strip-nondeterminism 0.032-1 was uploaded to unstable by Chris Lamb, with contributions from Chris Lamb.

The report also covered toolchain changes, packages reviewed and fixed and bugs filed, tests.reproducible-builds.org, and misc.

This week's edition was written by Ximin Luo, Holger Levsen and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

21 February 2017

Shirish Agarwal: The Indian elections hungama

Before I start, I would like to point out #855549. This is a normal/wishlist bug I have filed against apt, the command-line package manager. I sincerely believe that having a history command to know what packages were installed, which were upgraded, and which were purged should be easily accessible, easily understood, and if the output looks pretty, so much the better. Of particular interest to me is having a list of new packages I have installed in the last couple of years after jessie became the stable release. It probably would make for some interesting reading. I dunno how much effort it would be to code something like that, but if it works, it would be the greatest. Apt would have finally arrived. Not that it's a bad tool, it's just that it would then make for a heck of a useful tool.

Coming back to the topic at hand: for the last couple of weeks we don't have water, or rather pressure of water. A water crisis has been hitting Pune every year since 2014 with no end in sight. This has been reported in newspapers (addendum) but it seems it has been falling on deaf ears. The end result of it is that I have to bring buckets of water from around 50 odd metres. It's not a big thing; it's not like some women in some villages in Rajasthan who have to walk between 200 metres and 5 odd kilometres to get potable water, or Darfur, Western Sudan, where women are often kidnapped and sold as sexual slaves when they go to fetch water. The situation in Darfur has been shown quite vividly in Darfur is Dying. It is possible that I may have mentioned Darfur before. While unfortunately the game is a flash web resource, the most disturbing part is that the game is extremely depressing; there is a no-win scenario. So knowing and seeing both those scenarios, I can't complain about 50 metres. BUT... but when you extrapolate the same data over more or less 3.3-3.4 million citizens (3.1 million during the 2011 census, with a conservative 2.3-2.4 percent population growth rate according to scroll.in). Fortunately or unfortunately, Pune Municipal Corporation elections were held today. Fortunately or unfortunately, this time all the political parties brought largely unknown faces into these elections. For example, I belong to ward 14, which is spread over quite a bit of area and has around 10k registered voters. Now the unfortunate part of having new faces in elections: you don't know anything about them. Apart from the affidavits filed, the only thing I come to know is whether there are criminal cases filed against them and what they have shown as their wealth. While I am and should be thankful to ADR, which actually is the force behind having the collated data made public, there is a lot of untold story about political push-back by all the major national and regional political parties even when this bit of news was to be made public. It took the major part of a decade for such information to come into the public domain. But my purpose of getting clean air and a 24x7 water supply to each household seems a very distant dream. I tried to connect with the corporators about a week before the contest and almost all of the lower party functionaries hid behind their political parties' manifestos, stating they would do the best without any viable plan. For those not knowing: India has been blessed with 6 odd national parties and about 36 odd regional parties, and every election some 20-25 new parties try their luck. The problem is we, the public, don't trust them or their manifestos.
First of all, the political parties themselves engage in mud-slinging as to who's copying whom with the manifesto. Even if a political party wins the elections, there is no *real* pressure for them to follow their own manifesto. This has been going on for many a year. Of course, we the citizens are also to blame, as most citizens for one reason or another chose to remain aloof from the process. I scanned/leafed through all the manifestos and all of them have the vague wording "we will make Pune tanker-free" without any implementation details. While I was unable to meet the soon-to-be corporators, I did manage to meet a few of the assistants, but all the meetings were entirely fruitless. I asked why the city can't follow the Chennai model. Chennai, not so long ago, was in the same place where Pune is, especially in relation to water. What happened next, in 2001, has been beautifully chronicled in the Hindustan Times. What has not been shared in that story is that the idea was actually fielded by one of the Chennai Mayor's assistants, an IAS officer; I have forgotten her name. Thankfully, her advice/idea was taken to heart by the political establishment and they drove RWH. Asking why we can't do something similar in Pune, I heard all kinds of excuses. The worst and most used being "Marathas can never unite", which I think is pure bullshit. For people unfamiliar with the term: the Marathas were a warrior clan in Shivaji's army. Shivaji, the king of the Marathas, was an expert tactician and master of guerilla warfare. It is due to the valor of the Marathas that we still have the Maratha Light Infantry, a proud member of the Indian army. Why I said bullshit is that the composition of people living in Maharashtra has changed over the decades. While at one time both the Brahmins and the Marathas had considerable political and population numbers, that has changed drastically. Maharashtra, and more pointedly Mumbai, Pune and Nagpur, have become immigrant centres. Why, just a decade back, Shiv Sena, an ultra right-wing political party, used to play the Maratha card at each and every election and heckle people coming from Uttar Pradesh and Bihar; this has been documented as the 2008 attacks on immigrants, and 9 years later we see Shiv Sena trying to field its candidates in Uttar Pradesh. So, obviously, they cannot use the same tactics which they could at one point of time. One more reason I call it bullshit is that it's a very lame excuse. When the Prime Minister of the country calls for demonetization, which affects 1.25 billion people, people die, people stand in queues, and it is largely peaceful, I do not see people resisting if they bring a good scheme. I almost forgot: as an added sweetener, the Chennai municipality said that if you do RWH and show photos and certificates of the job, you won't have to pay as much property tax as you otherwise would; that also boosted people's participation. And that is not the only solution; one more solution has been outlined in Aaj Bhi Khade Hain Talaab, written by the just-deceased Gandhian environmental activist Anupam Mishra. His book can be downloaded for free at the India Water Portal. Unfortunately, the said book doesn't have a good English translation to date. Interestingly, all of his content is licensed under the public domain (CC0) so people can continue to enjoy and learn from his life-work. Another lesson or understanding could be taken from Israel, the father of modern micro-drip irrigation for crops.
One of the things on my bucket list is to visit Israel and, if possible, learn how they went from a water-deficient country to a water-surplus one. Which brings me to my second conundrum: most people believe that it's the Government's job to provide jobs to its people. India has been experiencing jobless growth for around a decade now, since the 2008 meltdown. While India was lucky to escape that, most of its trading partners weren't, hence it slowed down international trade, which slowed down the creation of new enterprises, etc. There are new laws, such as the Bankruptcy Law and the upcoming Goods and Services Tax. As everybody else, I am a bit excited and a bit apprehensive about how the actual implementation will take place. Even international businesses have been found wanting. The latest example has been Uber and Ola: there have been protests against the two cab/taxi aggregators operating in India. For the millions of jobless students coming out of schools and universities, there simply aren't enough jobs for them, nor are most (okay, 50%) of them qualified for the jobs; these 50 percent are also untrainable, so what to do? In reality, this is what keeps me awake at night. India is sitting on this ticking bombshell. It is really a miracle that the youths have not rebelled yet. While all the conditions, proposals and counter-proposals have been shared before, I wanted/needed to highlight them. While the issues seem to be local, I would assert that they are all glocal in nature. The questions we are facing, I'm sure both developing and, to some extent, even developed countries have probably been affected by. I look forward to knowing what I can learn from them.

Update 23/02/17: I had wanted to share about Debian's voting system a bit, but that got derailed. Hence, in order not to digress, I'll just point towards the 2015 platforms where 3 people vied for the DPL post. I *think* I shared about the DPL voting process earlier, but if not, I would do so in detail in some future blog post.
Filed under: Miscellenous Tagged: #Anupam Mishra, #Bankruptcy law, #Chennai model, #clean air, #clean water, #elections, #GST, #immigrant, #immigrants, #Maratha, #Maratha Light Infantry, #migration, #national parties, #Political party manifesto, #regional parties, #ride-sharing, #water availability, Rain Water Harvesting

15 February 2017

Antoine Beaupré: A look at password managers

As we noted in an earlier article, passwords are a liability and we'd prefer to get rid of them, but the current reality is that we do use a plethora of passwords in our daily lives. This problem is especially acute for technology professionals, particularly system administrators, who have to manage a lot of different machines. But it also affects regular users who still use a large number of passwords, from their online bank to their favorite social-networking site. Despite the remarkable memory capacity of the human brain, humans are actually terrible at recalling even short sets of arbitrary characters with the precision needed for passwords. Therefore humans reuse passwords, make them trivial or guessable, write them down on little paper notes and stick them on their screens, or just reset them by email every time. Our memory is undeniably failing us and we need help, which is where password managers come in. Password managers allow users to store an arbitrary number of passwords and just remember a single password to unlock them all. But there is a large variety of password managers out there, so which one should we be using? At my previous job, an inventory was done of about 40 different free-software password managers in different stages of development and of varying quality. So, obviously, this article will not be exhaustive, but instead focus on a smaller set of some well-known options that may be interesting to readers.

KeePass: the popular alternative The most commonly used password-manager design pattern is to store passwords in a file that is encrypted and password-protected. The most popular free-software password manager of this kind is probably KeePass. An important feature of KeePass is the ability to auto-type passwords in forms, most notably in web browsers. This feature makes KeePass really easy to use, especially considering it also supports global key bindings to access passwords. KeePass databases are designed for simultaneous access by multiple users, for example, using a shared network drive. KeePass has a graphical interface written in C#, so it uses the Mono framework on Linux. A separate project, called KeePassX, is a clean-room implementation written in C++ using the Qt framework. Both support the AES and Twofish encryption algorithms, although KeePass recently added support for the ChaCha20 cipher. AES key derivation is used to generate the actual encryption key for the database, but the latest release of KeePass also added the option of using Argon2, which was the winner of the July 2015 password-hashing competition. Both programs are more or less equivalent, although the original KeePass seems to have more features in general. The KeePassX project has recently been forked into another project now called KeePassXC that implements a set of new features that are present in KeePass but missing from KeePassX, like:
  • auto-type on Linux, Mac OS, and Windows
  • database merging which allows multi-user support
  • using the web site's favicon in the interface
So far, the maintainers of KeePassXC seem to be open to re-merging the project "if the original maintainer of KeePassX in the future will be more active and will accept our merge and changes". I can confirm that, at the time of writing, the original KeePassX project now has 79 pending pull requests and only one pull request was merged since the last release, which was 2.0.3 in September 2016. While KeePass and derivatives allow multiple users to access the same database through the merging process, they do not support multi-party access to a single database. This may be a limiting factor for larger organizations, where you may need, for example, a different password set for different technical support team levels. The solution in this case is to use separate databases for each team, with each team using a different shared secret.

Pass: the standard password manager? I am currently using password-store, or pass, as a password manager. It aims to be "the standard Unix password manager". Pass is a GnuPG-based password manager that sports a surprising number of features given its small size:
  • copy-paste support
  • Git integration
  • multi-user/group support
  • pluggable extensions (in the upcoming 1.7 release)
The command-line interface is simple to use and intuitive. The following will, for example, create a pass repository and a 20-character password for your LWN account, then copy it to the clipboard:
    $ pass init
    $ pass generate -c lwn 20
The main issue with pass is that it doesn't encrypt the name of those entries: if someone were to compromise my machine, they could easily see which sites I have access to simply by listing the passwords stored in ~/.password-store. This is a deliberate design decision by the upstream project, as stated by a mailing list participant, Allan Odgaard:
Using a single file per item has the advantage of shell completion, using version control, browse, move and rename the items in a file browser, edit them in a regular editor (that does GPG, or manually run GPG first), etc.
Odgaard goes on to point out that there are alternatives that do encrypt the entire database (including the site names) if users really need that feature. Furthermore, there is a tomb plugin for pass that encrypts the password store in a LUKS container (called a "tomb"), although it requires explicitly opening and closing the container, which makes it only marginally better than using full disk encryption system-wide. One could also argue that password file names do not hold secret information, only the site name and username, perhaps, and that doesn't require secrecy. I do believe those should be kept secret, however, as they could be used to discover (or prove) which sites you have access to and then used to perform other attacks. One could draw a parallel with the SSH known_hosts file, which used to be plain text but is now hashed so that hosts are more difficult to discover. Also, sharing a database for multi-user support will require some sort of file-sharing mechanism. Given the integrated Git support, this will likely involve setting up a private Git repository for your team, something which may not be accessible to the average Linux user. Nothing keeps you, however, from sharing the ~/.password-store directory through another file sharing mechanism like (say) Syncthing or Dropbox. You can use multiple distinct databases easily using the PASSWORD_STORE_DIR environment variable. For example, you could have a shell alias to use a different repository for your work passwords with:
    alias work-pass="PASSWORD_STORE_DIR=~/work-passwords pass"
Group support comes from a clever use of the GnuPG multiple-recipient encryption support. You simply have to specify multiple OpenPGP identities when initializing the repository, which also works in subdirectories:
    $ pass init -p Ateam me@example.com joelle@example.com
    mkdir: created directory '/home/me/.password-store/Ateam'
    Password store initialized for me@example.com, joelle@example.com
    [master 0e3dbe7] Set GPG id to me@example.com, joelle@example.com.
     1 file changed, 2 insertions(+)
     create mode 100644 Ateam/.gpg-id
The above will configure pass to encrypt the passwords in the Ateam directory for me@example.com and joelle@example.com. Pass depends on GnuPG to do the right thing when encrypting files and how those identities are treated is entirely delegated to GnuPG's default configuration. This could lead to problems if arbitrary keys can be injected into your key ring, which could confuse GnuPG. I would therefore recommend using full key fingerprints instead of user identifiers. Regarding the actual encryption algorithms used, in my tests, GnuPG 1.4.18 and 2.1.18 seemed to default to 256-bit AES for encryption, but that has not always been the case. The chosen encryption algorithm actually depends on the recipient's key preferences, which may vary wildly: older keys and versions may use anything from 128-bit AES to CAST5 or Triple DES. To figure out which algorithm GnuPG chose, you may want to try this pipeline:
    $ echo test | gpg -e -r you@example.com | gpg -d -v
    [...]
    gpg: encrypted with 2048-bit RSA key, ID XXXXXXX, created XXXXX
      "You Person You <you@example.com>"
    gpg: AES256 encrypted data
    gpg: original file name=''
    test
As you can see, pass is primarily a command-line application, which may make it less accessible to regular users. The community has produced different graphical interfaces that are either using pass directly or operate on the storage with their own GnuPG integration. I personally use pass in combination with Rofi to get quick access to my passwords, but less savvy users may want to try the QtPass interface, which should be more user-friendly. QtPass doesn't actually depend on pass and can use GnuPG directly to interact with the pass database; it is available for Linux, BSD, OS X, and Windows.

Browser password managers Most users are probably already using a password manager through their web browser's "remember password" functionality. For example, Chromium will ask if you want it to remember passwords and encrypt them with your operating system's facilities. For Windows, this encrypts the passwords with your login password and, for GNOME, it will store the passwords in the gnome-keyring storage. If you synchronize your Chromium settings with your Google account, Chromium will store those passwords on Google's servers, encrypted with a key that is stored in the Google Account itself. So your passwords are then only as safe as your Google account. Note that this was covered here in 2010, although back then Chromium didn't synchronize with the Google cloud or encrypt with the system-level key rings. That facility was only added in 2013. In Firefox, there's an optional, profile-specific master password that unlocks all passwords. In this case, the issue is that browsers are generally always open, so the vault is always unlocked. And this is for users that actually do pick a master password; users are often completely unaware that they should set one. The unlocking mechanism is a typical convenience-security trade-off: either users need to constantly input their master passwords to login or they don't, and the passwords are available in the clear. In this case, Chromium's approach of actually asking users to unlock their vault seems preferable, even though the developers actually refused to implement the feature for years. Overall, I would recommend against using a browser-based password manager. Even if it is not used for critical sites, you will end up with hundreds of such passwords that are vulnerable while the browser is running (in the case of Firefox) or at the whim of Google (in the case of Chromium). Furthermore, the "auto-fill" feature that is often coupled with browser-based password managers is often vulnerable to serious attacks, which is mentioned below. Finally, because browser-based managers generally lack a proper password generator, users may fail to use properly generated passwords, so they can then be easily broken. A password generator has been requested for Firefox, according to this feature request opened in 2007, and there is a password generator in Chrome, but it is disabled by default and hidden in the mysterious chrome://flags URL.

Other notable password managers Another alternative password manager, briefly mentioned in the previous article, is the minimalistic Assword password manager that, despite its questionable name, is also interesting. Its main advantage over pass is that it uses a single encrypted JSON file for storage, and therefore doesn't leak the name of the entries by default. In addition to copy/paste, Assword also supports automatically entering passphrases in fields using the xdo library. Like pass, it uses GnuPG to encrypt passphrases. According to Assword maintainer Daniel Kahn Gillmor in email, the main issue with Assword is "interaction between generated passwords and insane password policies". He gave the example of the Time-Warner Cable registration form that requires, among other things, "letters and numbers, between 8 and 16 characters and not repeat the same characters 3 times in a row". Another well-known password manager is the commercial LastPass service which released a free-software command-line client called lastpass-cli about three years ago. Unfortunately, the server software of the lastpass.com service is still proprietary. And given that LastPass has had at least two serious security breaches since that release, one could legitimately question whether this is a viable solution for storing important secrets. In general, web-based password managers expose a whole new attack surface that is not present in regular password managers. A 2014 study by University of California researchers showed that, out of five password managers studied, every one of them was vulnerable to at least one of the vulnerabilities studied. LastPass was, in particular, vulnerable to a cross-site request forgery (CSRF) attack that allowed an attacker to bypass account authentication and access the encrypted database.

Problems with password managers When you share a password database within a team, how do you remove access to a member of the team? While you can, for example, re-encrypt a pass database with new keys (thereby removing or adding certain accesses) or change the password on a KeePass database, a hostile party could have made a backup of the database before the revocation. Indeed, in the case of pass, older entries are still in the Git history. So access revocation is a problematic issue found with all shared password managers, as it may actually mean going through every password and changing them online. This fundamental problem with shared secrets can be better addressed with a tool like Vault or SFLvault. Those tools aim to provide teams with easy ways to store dynamic tokens like API keys or service passwords and share them not only with other humans, but also make them accessible to machines. The general idea of those projects is to store secrets in a central server and send them directly to relevant services without human intervention. This way, passwords are not actually shared anymore, which is similar in spirit to the approach taken by centralized authentication systems like Kerberos. If you are looking at password management for teams, those projects may be worth a look. Furthermore, some password managers that support auto-typing were found to be vulnerable to HTML injection attacks: if some third-party ad or content is able to successfully hijack the parent DOM content, it masquerades as a form that could fool auto-typing software as demonstrated by this paper that was submitted at USENIX 2014. Fortunately, KeePass was not vulnerable according to the security researchers, but LastPass was, again, vulnerable.
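With pass, for example, revoking joelle@example.com's access to the Ateam directory from the earlier example amounts to re-running pass init with the reduced recipient list, which re-encrypts the affected entries; as noted, though, the old ciphertexts survive in the Git history:

    $ pass init -p Ateam me@example.com
    $ pass git log -- Ateam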

Future of password managers? All of the solutions discussed here assume you have a trusted computer you regularly have access to, which is a usage pattern that seems to be disappearing with a majority of the population. You could consider your phone to be that trusted device, yet a phone can be lost or stolen more easily than a traditional workstation or even a laptop. And while KeePass has Android and iOS ports, those do not resolve the question of how to share the password storage among those devices or how to back them up. Password managers are fundamentally file-based, and the "file" concept seems to be quickly disappearing, faster than we technologists sometimes like to admit. Looking at some relatives' use of computers, I notice it is less about "files" than images, videos, recipes, and various abstract objects that are stored in the "cloud". They do not use local storage so much anymore. In that environment, password managers lose their primary advantage, which is a local, somewhat offline file storage that is not directly accessible to attackers. Therefore certain password managers are specifically designed for the cloud, like LastPass or web browser profile synchronization features, without necessarily addressing the inherent issues with cloud storage and opening up huge privacy and security issues that we absolutely need to address. This is where the "password hasher" design comes in. Also known as "stateless" or "deterministic" password managers, password hashers are emerging as a convenient solution that could possibly replace traditional password managers as users switch from generic computing platforms to cloud-based infrastructure. We will cover password hashers and the major security challenges they pose in a future article.
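To give a flavour of the deterministic design (an illustration of the general idea, not of any particular tool): each site password is derived on the fly from a master secret and the site name, so nothing needs to be stored at all:

    read -r -s -p "master: " master; echo
    printf '%s' "lwn.net" | openssl dgst -sha256 -hmac "$master" -binary | base64 | cut -c1-20

The derivation also hints at the security challenges such designs pose: the master secret can never be rotated without changing every derived password at once.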
Note: this article first appeared in the Linux Weekly News.

Antoine Beaupr : A look at password managers

As we noted in an earlier article, passwords are a liability and we'd prefer to get rid of them, but the current reality is that we do use a plethora of passwords in our daily lives. This problem is especially acute for technology professionals, particularly system administrators, who have to manage a lot of different machines. But it also affects regular users who still use a large number of passwords, from their online bank to their favorite social-networking site. Despite the remarkable memory capacity of the human brain, humans are actually terrible at recalling even short sets of arbitrary characters with the precision needed for passwords. Therefore humans reuse passwords, make them trivial or guessable, write them down on little paper notes and stick them on their screens, or just reset them by email every time. Our memory is undeniably failing us and we need help, which is where password managers come in. Password managers allow users to store an arbitrary number of passwords and just remember a single password to unlock them all. But there is a large variety of password managers out there, so which one should we be using? At my previous job, an inventory was done of about 40 different free-software password managers in different stages of development and of varying quality. So, obviously, this article will not be exhaustive, but instead focus on a smaller set of some well-known options that may be interesting to readers.

KeePass: the popular alternative The most commonly used password-manager design pattern is to store passwords in a file that is encrypted and password-protected. The most popular free-software password manager of this kind is probably KeePass. An important feature of KeePass is the ability to auto-type passwords in forms, most notably in web browsers. This feature makes KeePass really easy to use, especially considering it also supports global key bindings to access passwords. KeePass databases are designed for simultaneous access by multiple users, for example, using a shared network drive. KeePass has a graphical interface written in C#, so it uses the Mono framework on Linux. A separate project, called KeePassX is a clean-room implementation written in C++ using the Qt framework. Both support the AES and Twofish encryption algorithms, although KeePass recently added support for the ChaCha20 cipher. AES key derivation is used to generate the actual encryption key for the database, but the latest release of KeePass also added using Argon2, which was the winner of the July 2015 password-hashing competition. Both programs are more or less equivalent, although the original KeePass seem to have more features in general. The KeePassX project has recently been forked into another project now called KeePassXC that implements a set of new features that are present in KeePass but missing from KeePassX like:
  • auto-type on Linux, Mac OS, and Windows
  • database merging which allows multi-user support
  • using the web site's favicon in the interface
So far, the maintainers of KeePassXC seem to be open to re-merging the project "if the original maintainer of KeePassX in the future will be more active and will accept our merge and changes". I can confirm that, at the time of writing, the original KeePassX project now has 79 pending pull requests and only one pull request was merged since the last release, which was 2.0.3 in September 2016. While KeePass and derivatives allow multiple users to access the same database through the merging process, they do not support multi-party access to a single database. This may be a limiting factor for larger organizations, where you may need, for example, a different password set for different technical support team levels. The solution in this case is to use separate databases for each team, with each team using a different shared secret.

Pass: the standard password manager? I am currently using password-store, or pass, as a password manager. It aims to be "the standard Unix password manager". Pass is a GnuPG-based password manager that features a surprising number of features given its small size:
  • copy-paste support
  • Git integration
  • multi-user/group support
  • pluggable extensions (in the upcoming 1.7 release)
The command-line interface is simple to use and intuitive. The following, will, for example, create a pass repository, a 20 character password for your LWN account and copy it to the clipboard:
    $ pass init
    $ pass generate -c lwn 20
The main issue with pass is that it doesn't encrypt the name of those entries: if someone were to compromise my machine, they could easily see which sites I have access to simply by listing the passwords stored in ~/.password-store. This is a deliberate design decision by the upstream project, as stated by a mailing list participant, Allan Odgaard:
Using a single file per item has the advantage of shell completion, using version control, browse, move and rename the items in a file browser, edit them in a regular editor (that does GPG, or manually run GPG first), etc.
Odgaard goes on to point out that there are alternatives that do encrypt the entire database (including the site names) if users really need that feature. Furthermore, there is a tomb plugin for pass that encrypts the password store in a LUKS container (called a "tomb"), although it requires explicitly opening and closing the container, which makes it only marginally better than using full disk encryption system-wide. One could also argue that password file names do not hold secret information, only the site name and username, perhaps, and that doesn't require secrecy. I do believe those should be kept secret, however, as they could be used to discover (or prove) which sites you have access to and then used to perform other attacks. One could draw a parallel with the SSH known_hosts file, which used to be plain text but is now hashed so that hosts are more difficult to discover. Also, sharing a database for multi-user support will require some sort of file-sharing mechanism. Given the integrated Git support, this will likely involve setting up a private Git repository for your team, something which may not be accessible to the average Linux user. Nothing keeps you, however, from sharing the ~/.password-store directory through another file sharing mechanism like (say) Syncthing or Dropbox). You can use multiple distinct databases easily using the PASSWORD_STORE_DIR environment variable. For example, you could have a shell alias to use a different repository for your work passwords with:
    alias work-pass="PASSWORD_STORE_DIR=~/work-passwords pass"
Group support comes from a clever use of the GnuPG multiple-recipient encryption support. You simply have to specify multiple OpenPGP identities when initializing the repository, which also works in subdirectories:
    $ pass init -p Ateam me@example.com joelle@example.com
    mkdir: created directory '/home/me/.password-store/Ateam'
    Password store initialized for me@example.com, joelle@example.com
    [master 0e3dbe7] Set GPG id to me@example.com, joelle@example.com.
     1 file changed, 2 insertions(+)
     create mode 100644 Ateam/.gpg-id
The above will configure pass to encrypt the passwords in the Ateam directory for me@example.com and joelle@example.com. Pass depends on GnuPG to do the right thing when encrypting files, and how those identities are treated is entirely delegated to GnuPG's default configuration. This could lead to problems if arbitrary keys can be injected into your keyring, which could confuse GnuPG. I would therefore recommend using full key fingerprints instead of user identifiers. As for the actual encryption algorithms used: in my tests, GnuPG 1.4.18 and 2.1.18 seemed to default to 256-bit AES, but that has not always been the case. The chosen encryption algorithm actually depends on the recipient's key preferences, which may vary wildly: older keys and versions may use anything from 128-bit AES to CAST5 or Triple DES. To figure out which algorithm GnuPG chose, you may want to try this pipeline:
    $ echo test | gpg -e -r you@example.com | gpg -d -v
    [...]
    gpg: encrypted with 2048-bit RSA key, ID XXXXXXX, created XXXXX
      "You Person You <you@example.com>"
    gpg: AES256 encrypted data
    gpg: original file name=''
    test
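To follow the earlier recommendation about fingerprints, pass init also accepts full key fingerprints in place of email addresses; a minimal sketch, where the fingerprint shown is a hypothetical placeholder you would copy from the gpg output:
    $ gpg --fingerprint me@example.com
    $ pass init -p Ateam 0123456789ABCDEF0123456789ABCDEF01234567
This pins the recipient to a specific key, so an injected key carrying a matching user ID cannot silently become a recipient.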
As these examples show, pass is primarily a command-line application, which may make it less accessible to regular users. The community has produced various graphical interfaces that either use pass directly or operate on the storage with their own GnuPG integration. I personally use pass in combination with Rofi to get quick access to my passwords, but less savvy users may want to try the QtPass interface, which should be more user-friendly. QtPass doesn't actually depend on pass and can use GnuPG directly to interact with the pass database; it is available for Linux, BSD, OS X, and Windows.

Browser password managers Most users are probably already using a password manager through their web browser's "remember password" functionality. For example, Chromium will ask if you want it to remember passwords and encrypt them with your operating system's facilities. On Windows, this encrypts the passwords with your login password; on GNOME, it stores them in the gnome-keyring storage. If you synchronize your Chromium settings with your Google account, Chromium will store those passwords on Google's servers, encrypted with a key that is stored in the Google account itself, so your passwords are then only as safe as your Google account. Note that this was covered here in 2010, although back then Chromium didn't synchronize with the Google cloud or encrypt with the system-level key rings; that facility was only added in 2013. In Firefox, there's an optional, profile-specific master password that unlocks all passwords. In this case, the issue is that browsers are generally always open, so the vault is always unlocked. And this is for users who actually do pick a master password; users are often completely unaware that they should set one. The unlocking mechanism is a typical convenience-security trade-off: either users constantly input their master password to log in, or they don't, and the passwords are available in the clear. Here, Chromium's approach of actually asking users to unlock their vault seems preferable, even though the developers refused to implement the feature for years. Overall, I would recommend against using a browser-based password manager. Even if it is not used for critical sites, you will end up with hundreds of such passwords that are vulnerable while the browser is running (in the case of Firefox) or at the whim of Google (in the case of Chromium). Furthermore, the "auto-fill" feature that often comes with browser-based password managers is frequently vulnerable to serious attacks, as mentioned below. Finally, because browser-based managers generally lack a proper password generator, users may fail to use properly generated passwords, which can then be easily broken. A password generator has been requested for Firefox in a feature request opened in 2007, and there is a password generator in Chrome, but it is disabled by default and hidden in the mysterious chrome://flags URL.

Other notable password managers Another alternative password manager, briefly mentioned in the previous article, is the minimalistic Assword password manager which, despite its questionable name, is also interesting. Its main advantage over pass is that it uses a single encrypted JSON file for storage, and therefore doesn't leak the names of its entries by default. In addition to copy/paste, Assword also supports automatically entering passphrases in fields using the xdo library. Like pass, it uses GnuPG to encrypt passphrases. According to Assword maintainer Daniel Kahn Gillmor, in an email, the main issue with Assword is "interaction between generated passwords and insane password policies". He gave the example of the Time-Warner Cable registration form that requires, among other things, "letters and numbers, between 8 and 16 characters and not repeat the same characters 3 times in a row". Another well-known password manager is the commercial LastPass service, which released a free-software command-line client called lastpass-cli about three years ago. Unfortunately, the server software of the lastpass.com service is still proprietary, and given that LastPass has had at least two serious security breaches since that release, one could legitimately question whether it is a viable solution for storing important secrets. In general, web-based password managers expose a whole new attack surface that is not present in regular password managers. A 2014 study by University of California researchers showed that, of the five password managers they examined, every one was vulnerable to at least one of the attacks studied. LastPass in particular was vulnerable to a cross-site request forgery (CSRF) attack that allowed an attacker to bypass account authentication and access the encrypted database.

Problems with password managers When you share a password database within a team, how do you revoke a member's access? While you can, for example, re-encrypt a pass database with new keys (thereby removing or adding access; see the sketch after this section) or change the password on a KeePass database, a hostile party could have made a backup of the database before the revocation. Indeed, in the case of pass, older entries are still in the Git history. So access revocation is a problem with all shared password managers: it may actually mean going through every password and changing it online. This fundamental problem with shared secrets can be better addressed with a tool like Vault or SFLvault. Those tools aim to provide teams with easy ways to store dynamic tokens like API keys or service passwords and share them not only with other humans, but also make them accessible to machines. The general idea behind those projects is to store secrets in a central server and send them directly to the relevant services without human intervention. This way, passwords are not actually shared anymore, which is similar in spirit to the approach taken by centralized authentication systems like Kerberos. If you are looking at password management for teams, those projects may be worth a look. Furthermore, some password managers that support auto-typing were found to be vulnerable to HTML injection attacks: if a third-party ad or other content can hijack the parent DOM, it can masquerade as a login form and fool auto-typing software, as demonstrated by a paper presented at USENIX 2014. Fortunately, KeePass was not vulnerable according to the security researchers, but LastPass was, again, vulnerable.
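To make the revocation caveat concrete, here is a minimal sketch of re-keying a pass subdirectory after a member leaves, with joelle@example.com as the hypothetical departing member: re-running init with only the remaining recipients re-encrypts the store, but the Git history still contains ciphertext her key can decrypt:
    $ pass init -p Ateam me@example.com
    $ pass git log --oneline -- Ateam    # old commits still hold the old ciphertext
Rotating the passwords themselves therefore remains the only complete fix, since neither the history nor any prior backups can be recalled.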

Future of password managers? All of the solutions discussed here assume you have a trusted computer you regularly have access to, a usage pattern that seems to be disappearing for a majority of the population. You could consider your phone to be that trusted device, yet a phone can be lost or stolen more easily than a traditional workstation or even a laptop. And while KeePass has Android and iOS ports, those do not resolve the question of how to share the password storage among devices or how to back it up. Password managers are fundamentally file-based, and the "file" concept seems to be quickly disappearing, faster than we technologists sometimes like to admit. Looking at some relatives' use of computers, I notice it is less about "files" than about images, videos, recipes, and various abstract objects that are stored in the "cloud". They do not use local storage so much anymore. In that environment, password managers lose their primary advantage, which is local, somewhat offline file storage that is not directly accessible to attackers. Therefore certain password managers are designed specifically for the cloud, like LastPass or the browsers' profile synchronization features, without necessarily addressing the inherent problems of cloud storage, opening up huge privacy and security issues that we absolutely need to address. This is where the "password hasher" design comes in. Also known as "stateless" or "deterministic" password managers, password hashers are emerging as a convenient solution that could possibly replace traditional password managers as users switch from generic computing platforms to cloud-based infrastructure. We will cover password hashers and the major security challenges they pose in a future article.
Note: this article first appeared in the Linux Weekly News.

5 February 2017

Bits from Debian: Debian welcomes its Outreachy interns

Better late than never, we'd like to welcome our three Outreachy interns for this round, lasting from the 6th of December 2016 to the 6th of March 2017. Elizabeth Ferdman is working on the Clean Room for PGP and X.509 (PKI) Key Management. Maria Glukhova is working on reproducible builds for Debian and free software. Urvika Gola is working on improving voice, video and chat communication with free software. From the official website: Outreachy helps people from groups underrepresented in free and open source software get involved. We provide a supportive community for beginning to contribute any time throughout the year and offer focused internship opportunities twice a year with a number of free software organizations. The Outreachy program is possible in Debian thanks to the effort of Debian developers and contributors who dedicate part of their free time to mentoring students and other outreach tasks, and to the help of the Software Freedom Conservancy, which provides administrative support for Outreachy, as well as the continued support of Debian's donors, who provide funding for the internships. Debian will also participate in the next round of Outreachy, during the summer of 2017. More details will follow in the coming weeks. Join us and help extend Debian! You can follow the work of the Outreachy interns by reading their blogs (they are syndicated in Planet Debian), and chat with us in the #debian-outreach IRC channel and mailing list. Congratulations, Elizabeth, Maria and Urvika!

29 January 2017

Reproducible builds folks: Reproducible Builds: week 91 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday January 15 and Saturday January 21 2017:
Media Coverage
Upcoming Events
Toolchain development and fixes
Ximin Luo continued work on data formats, code, and test cases for SOURCE_PREFIX_MAP. He also continued to talk with the rustc team on the topic. Chris Lamb submitted a patch to implement SOURCE_DATE_EPOCH for wordwarvi, a game which gave extra points to people who built it from source within one hour. This fixes Debian #786593. Launchpad bug 1657704 was filed for them to start accepting buildinfo files.
Bugs filed
Reviews of unreproducible packages
10 package reviews have been added, 149 have been updated and 153 have been removed in this week, adding to our knowledge about identified issues. 2 issue types have been updated:
Weekly QA work
During our reproducibility testing, the following FTBFS bugs have been detected and reported by:
diffoscope development
diffoscope 69 was uploaded to unstable by Chris Lamb. It included contributions from: Further development continued in Git, and will be released as version 70 next week:
reproducible-builds.org website development
tests.reproducible-builds.org
Misc.
This week's edition was written by Ximin Luo, Vagrant Cascadian, Holger Levsen & Chris Lamb and reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

27 January 2017

Iustin Pop: Languages, part 1

Human languages, part 1 I do enjoy writing blog posts, but sometimes time is lacking, other times inspiration. As I was eating dinner today, I was lost in thought and my eyes stopped on the leaflet from a certain cough medication (the thing almost no one reads, like EULAs). I was quite surprised by the text, so much so that I said "well, some blog posts on languages might be interesting". Background: my mother tongue is Romanian. While growing up and learning foreign languages, I considered only the utilitarian aspect of languages, but as I get older (not old, older!), I find human languages more and more interesting. Back to the subject: this being Switzerland, most everything is written in German, French and Italian (in this order of frequency). German is still a foreign language to me (let's say I get by, but not nicely), French is the fancy high-class cousin, and Italian... well, Italian is a special case. When I first saw Italian on TV (a newscast while travelling in Italy) I was shocked at how much one can understand without ever learning Italian (much, much more than German after years of trying). So Italian is quite close, and usually one understands a third to half of it; this holds both for speech (less in common language, more in official language) and for writing. In this particular case, the instructions for use say (sorry for typos, I am copying by hand):
In casi rarissimi possono manifestarsi reazioni di ipersensibilità grave con tumefazione del viso, difficoltà respiratoria (dispnea) e diminuzione della pressione arteriosa.
What was surprising here was not the list of side effects (hah), but that this short phrase is 98% identical to its Romanian translation: not only the words, but also the phrase structure. I don't think I've ever seen this before:
În cazuri rare se pot manifesta reacții de hipersensibilitate gravă cu tumefacție a feței, dificultate respiratorie (dispnee) și diminuare a presiunii arteriale.
Not all the words are identical, but even the one that is obviously different (it. 'del viso', ro. 'a feței') is easily translatable, as 'visiune' in Romanian relates to seeing, so the link is clear. This phrase structure is also quite a natural way to say these things in Romanian. I was then curious to see the French version, which is:
Dans de très rares cas, [the medicine] peut déclencher de violentes réactions d'hypersensibilité s'accompagnant d'un gonflement du visage, de détresse respiratoire et d'une chute de tension.
French is usually different enough from Romanian that one has to study it (for quite a while, especially the grammar) in order to be proficient, but here too you can make a word-by-word translation (transposition?) that doesn't lose the meaning:
În cazuri tare rari, [the medicine] poate declanșa violente reacții de hipersensibilitate acompaniate de o umflare a feței, de ???? respiratorie și de ???? a tensiunii.
Basically here we have two non-equivalent words and a somewhat weirder phrase structure (it sounds more like colloquial speech than written language), but for French this is also surprisingly close. You'd invert some of the adjective-noun pairs (fr. 'violentes réactions' is understandable in Romanian as 'violente reacții', but it sounds very poetical and you'd usually write it as 'reacții violente'). The next phrase is no longer that similar, but the one after is again obviously identical:
Se osserva effetti collaterali qui non descritti, dovrebbe informare il suo medico, il suo farmacista o il suo drogerie.
which is:
[Dacă/If] se observă efecte colaterale care nu sunt descrise, trebuie informat medicul vostru, farmacistul vostru sau [xxxx - no real equivalent].
And the French is also identical, modulo again 'droguiste':
Si vous remarquez des effets secondaires qui ne sont pas mentionnés dans cette notice, veuillez en informer votre médecin, votre pharmacien ou votre droguiste.
which is:
Dacă remarcați efecte secundare care nu sunt menționate în această notă, informați medicul vostru, farmacistul vostru sau al vostru [xxxx].
This is even closer; 'votre' is more similar to '[al] vostru' than 'suo' is, and the phrase structure is much more natural - this is exactly how you'd write it in 'native' Romanian, whereas the Italian is not (I had to add the 'if' to make it parseable). The 'vous' in 'vous remarquez' is 'voi' in Romanian, but it doesn't need to be added, as it would be redundant; it doesn't confuse the phrase either. The 'veuillez en informer' doesn't have a 1:1 translation (it would be written as 'vă rugăm să informați'), but is still understandable; a false-friend translation would be 'vedeți să informați'/see to inform/voir informer. Why is this all surprising? Because Romanian has a significant amount of words of Slavic origin (~11% of the overall vocabulary, 15% of the most commonly used 2500 words) and some from other nearby countries (Turkish, Greek, some Hungarian and German). At a stretch, it's even possible to write simple but complete sentences entirely with words of Slavic origin, as I learned from an interesting YouTube video. Also, our accent is almost always confused with Russian, not with Italian or French. So, to summarize: normally you see sentence elements that are similar or identical, but not entire sentences, and definitely not whole phrases. What made the three languages here keep, in this particular case, not only similar but almost identical words and also almost identical phrase structure? Is it the subject (medicine)? Maybe. Is it a random fluke? If so, I don't remember seeing one before. Do I just see similarities where there are none? Possibly, but at least I thought it worth mentioning; it was quite surprising to me. Did my brain get confused by too many languages, misinterpreting words that don't really exist (e.g. I was sure that 'vizajul' is a Romanian word, but upon checking, it isn't)? Also possible. In any case, for me it was a good subject for a blog post. Now let's not go near Spanish, and definitely not near Portuguese...

18 January 2017

Reproducible builds folks: Reproducible Builds: week 90 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday January 8 and Saturday January 14 2017:
Upcoming Events
Reproducible work in other projects
Reproducible Builds have been mentioned in the FSF high-priority project list. The F-Droid Verification Server has been launched. It rebuilds apps from source that were built by f-droid.org and checks that the results match. Bernhard M. Wiedemann did some more work on reproducibility for openSUSE. Bootstrappable.org (unfortunately no HTTPS yet) was launched after the initial work was started at our recent summit in Berlin. This is another topic related to reproducible builds, and both will be needed in order to perform "Diverse Double Compilation" in practice in the future.
Toolchain development and fixes
Ximin Luo researched data formats for SOURCE_PREFIX_MAP and explored different options for encoding a map data structure in a single environment variable. He also continued to talk with the rustc team on the topic. Daniel Shahaf filed #851225 ('udd: patches: index by DEP-3 "Forwarded" status') to make it easier to track our patches. Chris Lamb forwarded #849972 upstream to yard, a Ruby documentation generator. Upstream has fixed the issue as of release 0.9.6. Alexander Couzens (lynxis) has made mksquashfs reproducible and is looking for testers. It compiles on BSD systems such as FreeBSD, OpenBSD and NetBSD.
Bugs filed
Chris Lamb: Lucas Nussbaum: Nicola Corna:
Reviews of unreproducible packages
13 package reviews have been added and 13 have been removed in this week, adding to our knowledge about identified issues. 1 issue type has been added:
Weekly QA work
During our reproducibility testing, the following FTBFS bugs have been detected and reported by:
diffoscope development
Bugs in diffoscope in the last year
Many bugs were opened in diffoscope during the past few weeks, which probably is a good sign as it shows that diffoscope is much more widely used than a year ago. We have been working hard to squash many of them in time for Debian stable, though we will see how that goes in the end.
reproducible-website development
tests.reproducible-builds.org
Misc.
This week's edition was written by Ximin Luo, Chris Lamb and Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

16 January 2017

Maria Glukhova: APK, images and other stuff.

2 more weeks of my awesome Outreachy journey have passed, so it is time for an update on my progress. I continued my work on improving diffoscope by fixing bugs and completing wishlist items. These include:

Improving APK support I worked on #850501 and #850502 to improve the way diffoscope handles APK files. Thanks to Emanuel Bronshtein for providing a clear description of how to reproduce these bugs and ideas on how to fix them. And special thanks to Chris Lamb for insisting on tests for these changes! That part actually proved to be a little more tricky, and I managed to make a mess of those tests (extra thanks to Chris for cleaning up the mess I created). I hope that also means I learned something from my mistakes. Also, I was pleased to see the F-Droid Verification Server as a sign of F-Droid's progress on the reproducible builds effort - I hope these changes to diffoscope will help them!

Adding support for image metadata This came from #849395 - a request to compare image metadata along with image content. Diffoscope has support for three types of images: JPEG, MS Windows Icon (*.ico) and PNG. Among these, PNG already had good image metadata support thanks to the sng tool, so I worked on support for .jpeg and .ico files. I initially tried to use exiftool for extracting metadata, but then discovered it does not handle .ico files, so I decided to bring in a bigger gun - ImageMagick's identify - for this task. I was glad to see it had that handy -format option I could use to select only the necessary fields (I found its -verbose output, well, too verbose for the task) and to present them in a defined form, removing the need to filter the output. What was particularly interesting and important for me in terms of learning: while working on this feature, I discovered that, at the moment, diffoscope could not handle .ico files at all - the img2txt tool, which was used for retrieving image content, did not support that type of image. But instead of recognizing this as a bug and resolving it, I started to think of possible workarounds that would allow retrieving image metadata even after retrieving image content had failed. Definitely not very good thinking. Thanks to Mattia Rizzolo for actually recognizing this as a bug and filing it, and to Chris Lamb for fixing it!
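For illustration, identify's -format option takes printf-style percent escapes, so a fixed set of metadata fields can be emitted in a stable layout; this is only a sketch of the technique, not the exact field list used in diffoscope:
    $ identify -format "Type: %m\nGeometry: %wx%h\nDepth: %z bits\n" favicon.ico
Because the output shape is fully determined by the format string, two such outputs can be diffed directly without any further filtering.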

Other work

Order-like differences, part 2 In the previous post, I mentioned Lunar's suggestion to use hashing for finding order-like differences in a wide variety of input data. I implemented that idea, but after discussion with my mentor, we decided it is probably not worth it - the change would alter quite a lot of things in core modules of diffoscope, and the gain would not be really significant. Still, implementing it was an important experience for me, as I had to hack on the deepest and, arguably, most difficult modules of diffoscope, and gained some insight into how they work.
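The intuition behind the hashing idea can be shown with a two-line shell experiment (a conceptual sketch only, not how diffoscope's core modules would have implemented it): if the digests of the sorted lines match while the files themselves differ, the difference is purely one of ordering:
    $ sort a.txt | sha256sum
    $ sort b.txt | sha256sum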

Comparing with several tools (work in progress) Although my initial motivation for this idea was flawed (the workaround I mentioned earlier for .ico files), it still might be useful to have a mechanism that allows running several commands for finding differences, giving the output of those that succeed and failing if and only if they all have failed. One case where this matters is when the commands come from different tools and one of them is not installed. It would be nice if we still used the other, rather than the uninformative binary diff (the default fallback for when something goes wrong with a more clever comparison). I am still polishing this change, though, and still in doubt whether it is needed at all.
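In shell terms, the intended behaviour looks roughly like this (a toy sketch with example tool names, not diffoscope's actual Python implementation):
    ok=1
    for tool in img2txt identify; do
        # keep the output of every tool that succeeds
        "$tool" image.ico 2>/dev/null && ok=0
    done
    exit $ok    # fail if and only if all of the tools failed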

Side note - Outreachy and my university progress In my Outreachy application, I promised that if I was selected for this round, I would do everything I could to free the required time period from my university commitments. I did that by moving most of my courses to the first half of the academic year. Now, the main thing left for me to do is my Master's thesis. I consulted my scientific advisors at both universities I am formally attending (SFEDU and LUT - I am in a double-degree program), and as a result, they agreed to change my Master's thesis topic to match my Outreachy work. Now, that should sound like excellent news - merging these activities actually means I can allocate much more time to my work on reproducible builds, even beyond the actual internship period. It was intended to remove a burden from my shoulders. Still, I feel a bit uneasy. The drawback of this decision lies in the fact that I have no idea how to write a scientific report based on purely practical work. I know other students from my universities have done such things before, but choosing my own topic means my scientific advisors can't help me much - this is just outside their area of expertise. Well, wish me luck - I'm up to the challenge!

11 January 2017

Reproducible builds folks: Reproducible Builds: week 89 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday January 1 and Saturday January 7 2017:
GSoC and Outreachy updates
Toolchain development
Packages reviewed and fixed, and bugs filed
Chris Lamb: Dhole:
Reviews of unreproducible packages
13 package reviews have been added, 4 have been updated and 6 have been removed in this week, adding to our knowledge about identified issues. 2 issue types have been added/updated:
Upstreaming of reproducibility fixes
Merged: Opened:
Weekly QA work
During our reproducibility testing, the following FTBFS bugs have been detected and reported by:
diffoscope development
diffoscope 67 was uploaded to unstable by Chris Lamb. It included contributions from:
[ Chris Lamb ]
* Optimisations:
  - Avoid multiple iterations over archive by unpacking once for an ~8X
    runtime optimisation.
  - Avoid unnecessary splitting and interpolating for a ~20X optimisation
    when writing --text output.
  - Avoid expensive diff regex parsing until we need it, speeding up diff
    parsing by 2X.
  - Alias expensive Config() in diff parsing lookup for a 10% optimisation.
* Progress bar:
  - Show filenames, ELF sections, etc. in progress bar.
  - Emit JSON on the status file descriptor output instead of a custom
    format.
* Logging:
  - Use more-Pythonic logging functions and output based on __name__, etc.
  - Use Debian-style "I:", "D:" log level format modifier.
  - Only print milliseconds in output, not microseconds.
  - Print version in debug output so that saved debug outputs can standalone
    as bug reports.
* Profiling:
  - Also report the total number of method calls, not just the total time.
  - Report on the total wall clock taken to execute diffoscope, including
    cleanup.
* Tidying:
  - Rename "NonExisting" -> "Missing".
  - Entirely rework diffoscope.comparators module, splitting as many separate
    concerns into a different utility package, tidying imports, etc.
  - Split diffoscope.difference into diffoscope.diff, etc.
  - Update file references in debian/copyright post module reorganisation.
  - Many other cleanups, etc.
* Misc:
  - Clarify comment regarding why we call python3(1) directly. Thanks to
    Jérémy Bobbio <lunar@debian.org>.
  - Raise a clearer error if trying to use --html-dir on a file.
  - Fix --output-empty when files are identical and no outputs specified.
[ Reiner Herrmann ]
* Extend .apk recognition regex to also match zip archives (Closes: #849638)
[ Mattia Rizzolo ]
* Follow the rename of the Debian package "python-jsbeautifier" to
  "jsbeautifier".
[ siamezzze ]
* Fixed no newline being classified as order-like difference.
reprotest development
reprotest 0.5 was uploaded to unstable by Chris Lamb. It included contributions from:
[ Ximin Luo ]
* Stop advertising variations that we're not actually varying.
  That is: domain_host, shell, user_group.
* Fix auto-presets in the case of a file in the current directory.
* Allow disabling build-path variations. (Closes: #833284)
* Add a faketime variation, with NO_FAKE_STAT=1 to avoid messing with
  various buildsystems. This is on by default; if it causes your builds
  to mess up please do file a bug report.
* Add a --store-dir option to save artifacts.
Other contributions (not yet uploaded):
reproducible-builds.org website development
tests.reproducible-builds.org
Misc.
This week's edition was written by Chris Lamb, Holger Levsen and Vagrant Cascadian, reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

3 January 2017

Maria Glukhova: Getting to know diffoscope better

I apologize to all potential readers of this blog for not writing a comprehensive introduction post with details of the project I am taking part in during my internship, as well as the story of how I ended up here. Let me just say that I had been a Debian user for years when I discovered it takes part in Outreachy as one of the participating organisations. Their Reproducible Builds effort has a noble goal and a bunch of great people behind it - I had no chance of not getting excited by it. Looking for a place where my skills could be of any use, I discovered diffoscope - a tool for in-depth comparison of files, archives etc. My mentor, Mattia Rizzolo, supported my decision to work on it, so now I am concentrating my efforts on improving diffoscope. As my first steps, I am doing the small (but hopefully still somewhat important) job of fixing existing bugs. It helps me to better understand how diffoscope works, and introduces me to the workflow of open-source development. During December, I made several small contributions, mostly fixing bugs.

Test data and jessie-backports The first of them could be described as cleaning up after my own mistake, although that mistake wasn't trivial. During the application period, I fixed a bug where diffoscope failed while comparing symlinks to directories. That was a small change, but I included some tests for the case anyway. And that actually caused problems. With these tests, I included test data: two folders with symlinks. All was good in the unstable version of Debian, but in jessie-backports, that commit caused the build to fail. After some digging, I discovered the problem was caused by the build process copying that data. That was done using the shutil Python module, and the older version of that module included in jessie could not handle copying symlinks to directories properly. Thanks to my mentor for giving me a hint on how to resolve this: using temporary folders and creating the symlinks at runtime. That way, we ensured the tests run without problems during the build process on jessie. What have I learned: a great deal, actually. I spent too much time on that one, but I learned how to build packages, what happens during a dpkg-buildpackage run, and what the debhelper tools are for. I also learned a bit about what chroot is and how to use it for testing.
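In shell terms, the runtime-fixture trick looks like this (a sketch of the idea only; the real fix lives in diffoscope's Python test suite):
    tmpdir=$(mktemp -d)            # fresh directory created at test time
    mkdir "$tmpdir/target"
    ln -s target "$tmpdir/link"    # symlink to a directory, built at runtime
                                   # instead of being copied during the build
    rm -rf "$tmpdir"               # clean up afterwards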

ICC profile files and the file-type recognition regexp Another one was also about failing tests and, therefore, a failing build. The tests were failing because ICC files were no longer recognized by diffoscope. It turned out libmagic got an update which changed its description of ICC profile files. Diffoscope relies on regexps applied to the file type description to recognize files, so I changed the regexp to reflect the change in libmagic. What have I learned: how diffoscope recognizes file types. It got me thinking: maybe there is a better way? That regexp-based approach is doomed to cause problems with every change to a file type description. I have this question still lingering in my mind - maybe I will come up with an idea later.
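The fragile mechanism can be demonstrated from the shell (an illustration of the pattern only, not diffoscope's actual regexps; the exact description strings vary between libmagic versions, which is precisely the problem described):
    $ file --brief photo.jpg
    JPEG image data, JFIF standard 1.01
    $ file --brief photo.jpg | grep -qE '^JPEG image data' && echo "recognized as JPEG"
Any upstream rewording of that description string silently breaks the match.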

Order-like difference in text files Next, I decided to do something a bit bigger and fulfilled a feature request. The request was for detecting order-like differences in text files (when files have the same lines, but in a different order). I did it by collecting the added and removed lines from the diff output into lists, sorting them, and then comparing them. Sadly, I forgot about one particular case - when one of the files is missing the newline at the end of the file. I was kindly reminded of that quite soon in comments on the bug tracker (thanks danielsh!) and have already fixed it. I also received feedback on a better way to implement it deeper in diffoscope - not using the results of diff, but comparing sums of hashes of the lines directly in the difference module. I am yet to try that. What have I learned: that a call to diff is actually the slowest part of a diffoscope run on two big text files. Could that help somehow in speeding it up? I don't know yet. I also learned to comment on bugs in the Debian bug tracker and was surprised by how much feedback I got. Thanks to my mentor for pushing me to do that - I definitely need to overcome my fear of communication to be more effective!
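Conceptually, the check reduces to a classic shell idiom (a sketch assuming bash process substitution; diffoscope operates on its internal diff results rather than re-sorting the files):
    if ! diff -q a.txt b.txt >/dev/null &&
         diff -q <(sort a.txt) <(sort b.txt) >/dev/null; then
        echo "same lines, different order"
    fi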

Random FTBFS There was also a very nasty bug that caused diffoscope to randomly fail to build from source, with the uninformative error "Fatal Python error: deallocated None". It already seemed strange when first reported; it got even stranger when the bug suddenly ceased to be reproducible. We hoped that would mean the bug had been caused by some external tool, and had been fixed there. Turns out it was not that easy. I tested this on two separate computers and on a virtual machine; I used different versions of diffoscope. Well. It seems the bug is still somehow tied to the diffoscope version and not to some external tool version - I can still do git checkout 64 and reproduce the bug (still randomly, though). Although I spent quite a lot of time on that one, the only result was information about the connection between bug appearances and diffoscope versions. I still wasn't able to get to the root of the problem - hopefully, someone else will be able to, given the information I found. What have I learned: git-bisect! Thanks to my friend for pointing me to it; that tool came in handy in this situation. I also got some experience in catching nasty bugs like this (a pity there's no experience in squashing them yet). I had some extra time commitments in December, one of them (Reproducible Builds Summit II) connected to my internship and one (my exam session at university) not. In January, I should be able to allocate more time to this work - I hope it will help me achieve more significant results. Many thanks to Mattia Rizzolo, Chris Lamb, Holger Levsen and all the other folks of the Reproducible Builds project - I cannot stress enough how important your support is to me. Wish you all a great 2017!

29 December 2016

Reproducible builds folks: Reproducible Builds: week 87 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday December 18 and Saturday December 24 2016:
Media coverage
100% Of The 289 Coreboot Images Are Now Built Reproducibly by Phoronix, with more details in German by Pro-Linux.de. We have further reports on our Reproducible Builds World Summit #2 in Berlin from Rok Garbas of NixOS as well as Clemens Lang of MacPorts.
Debian infrastructure work
Dak now archives buildinfo files thanks to a patch from Chris Lamb. We have also mostly finalised a design for how they will be distributed by the Debian FTP mirror network, which we will start implementing soon. This is great for the future of Debian, but unfortunately it also means that we won't have .buildinfo files for Stretch, as Debian will not rebuild its source packages and the binary packages currently in the archive were mostly built with dpkg versions older than 1.18.11. reprepro/5.0.0-1 has added support for dealing with .buildinfo files that are included in .changes files. (Closes: #843402)
Reproducible work in other projects
The Chromium project is now working on making their build process (mostly) deterministic. Their motivation is to save both "[money] (less hardware is required) and developer time (reduced latency by having less work to do on the TS and CI)".
Unreproducible bugs filed
Reviews of unreproducible packages
39 package reviews have been added, 75 have been updated and 44 have been removed in this week, adding to our knowledge about identified issues. 2 issue types have been updated:
Weekly QA work
During our reproducibility testing, some FTBFS bugs have been detected and reported by:
diffoscope development
diffoscope 66 was uploaded to unstable by Chris Lamb. It included contributions from:
strip-nondeterminism development
strip-nondeterminism 0.029-1 was uploaded to unstable by Chris Lamb. It included no new content from this week, but rather included contributions from previous weeks.
reproducible-website development
The website is now also accessible via the https://www.reproducible-builds.org URL.
tests.reproducible-builds.org
Misc.
This week's edition was written by Ximin Luo, Holger Levsen & Chris Lamb and reviewed by a bunch of Reproducible Builds folks on IRC and the mailing lists.
